Hinge loss

In machine learning, the hinge loss is a loss function used for training classifiers. The hinge loss is used for "maximum-margin" classification, most notably for support vector machines (SVMs).
For an intended output t = ±1 and a classifier score y, the hinge loss of the prediction y is defined as
:\ell(y) = \max(0, 1-t \cdot y)
Note that y should be the "raw" output of the classifier's decision function, not the predicted class label. For instance, in linear SVMs, y = \mathbf{w} \cdot \mathbf{x} + b, where (\mathbf{w}, b) are the parameters of the hyperplane and \mathbf{x} is the point to classify.
It can be seen that when t and y have the same sign (meaning y predicts the right class) and |y| \ge 1, the hinge loss \ell(y) = 0, but when they have opposite signs, \ell(y) increases linearly with y (one-sided error).
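As a concrete illustration, here is a minimal Python/NumPy sketch of this definition (the function name hinge_loss and the values of w, b, and x are made up for the example, not taken from the article):

 import numpy as np
 
 def hinge_loss(t, y):
     # Binary hinge loss for an intended output t in {-1, +1} and a raw score y.
     return max(0.0, 1.0 - t * y)
 
 # Raw linear-SVM score y = w . x + b, with illustrative parameter values.
 w, b = np.array([2.0, -1.0]), 0.5
 x = np.array([1.0, 1.0])
 y = w @ x + b                 # decision value: 1.5
 print(hinge_loss(+1, y))      # 0.0 -- correct class, outside the margin
 print(hinge_loss(-1, y))      # 2.5 -- wrong class; loss grows linearly with y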
==Extensions==
While SVMs are commonly extended to multiclass classification in a one-vs.-all or one-vs.-one fashion,
there exists a "true" multiclass version of the hinge loss due to Crammer and Singer,
defined for a linear classifier as
:\ell(y) = \max(0, 1 + \max_{t \ne y} \mathbf{w}_t \mathbf{x} - \mathbf{w}_y \mathbf{x})
where y denotes the correct class label and \mathbf{w}_t the weight vector for class t.
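A small NumPy sketch of this variant, assuming the per-class vectors \mathbf{w}_t are stacked as the rows of a matrix W (the names and values are illustrative):

 import numpy as np
 
 def multiclass_hinge(W, x, y):
     # Crammer-Singer multiclass hinge loss.
     # W: (classes, features) matrix whose rows are the w_t; y: correct class index.
     scores = W @ x                      # w_t . x for every class t
     margins = 1.0 + scores - scores[y]  # 1 + w_t . x - w_y . x
     margins[y] = 0.0                    # exclude the t == y term from the max
     return max(0.0, margins.max())
 
 W = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [0.5, 0.5]])
 x = np.array([2.0, -1.0])
 print(multiclass_hinge(W, x, y=0))  # 0.0: class 0 beats the runner-up by >= 1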
In structured prediction, the hinge loss can be further extended to structured output spaces. Structured SVMs with margin rescaling use the following variant, where \mathbf{w} denotes the SVM's parameters, \mathbf{y} the SVM's prediction, \phi the joint feature function, and \Delta the Hamming loss:
:\begin{align}
\ell(\mathbf{y}) & = \max(0, \Delta(\mathbf{y}, \mathbf{t}) + \langle \mathbf{w}, \phi(\mathbf{x}, \mathbf{y}) \rangle - \langle \mathbf{w}, \phi(\mathbf{x}, \mathbf{t}) \rangle) \\
& = \max\left(0, \max_{y \in \mathcal{Y}} \left( \Delta(\mathbf{y}, \mathbf{t}) + \langle \mathbf{w}, \phi(\mathbf{x}, \mathbf{y}) \rangle \right) - \langle \mathbf{w}, \phi(\mathbf{x}, \mathbf{t}) \rangle \right)
\end{align}
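The following toy NumPy sketch evaluates the margin-rescaled variant by brute force; the tiny output space and the joint feature map phi are assumptions made purely for the example:

 import numpy as np
 from itertools import product
 
 def hamming(y, t):
     # Hamming loss: number of positions where y and t disagree.
     return sum(a != b for a, b in zip(y, t))
 
 def phi(x, y):
     # Illustrative joint feature map: per-position features signed by the label.
     return np.concatenate([xi * (1 if yi else -1) for xi, yi in zip(x, y)])
 
 def structured_hinge(w, x, t, outputs):
     # Margin-rescaled structured hinge, maximizing over an enumerable output space.
     score_t = w @ phi(x, t)
     return max(0.0, max(hamming(y, t) + w @ phi(x, y) for y in outputs) - score_t)
 
 x = [np.array([1.0]), np.array([-0.5])]   # two positions, 1-D features each
 t = (1, 0)                                # correct structured output
 outputs = list(product((0, 1), repeat=2)) # all four candidate outputs
 w = np.array([2.0, 2.0])
 print(structured_hinge(w, x, t, outputs)) # 0.0: t itself attains the inner max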

Excerpt source: Wikipedia, the free encyclopedia; read the full "hinge loss" article on Wikipedia.


